WSEAS Transactions on Power Systems


Print ISSN: 1790-5060
E-ISSN: 2224-350X

Volume 12, 2017




Industrial process steam consumption prediction through an Artificial Neural Networks (ANNs) approach

AUTHORS: Fitsum Bekele Tilahun, Ramchandra Bhandari, Menegesha Mamo


ABSTRACT: Recent research has demonstrated the capability of Artificial Neural Networks (ANNs) to learn and generalize when solving complex industrial problems. However, few such studies have investigated whether ANNs are also effective at identifying energy use patterns in industrial processes. In this work, a resilient gradient descent variant of a multilayer perceptron (MLP) is developed to determine steam consumption patterns as a function of production rate in a textile factory. The model is tested using real-time data from each steam-consuming machine's daily production and the meter readings of an electrical steam boiler. Part of these data (85%) was randomly selected to train the network; the remaining data were used to test the performance of the trained network. The result showed an acceptable error performance index of about 0.0674, and the model gave a correlation coefficient (R) of 0.9781 between the estimated and target values. Thus the proposed neural network can serve as a valuable energy use approximator in industrial production processes. Moreover, with more training data available, an even better prediction capability can be achieved.

KEYWORDS: Artificial Neural Networks (ANNs), multilayer perceptron (MLP), resilient gradient descent, industrial processes, steam consumption prediction
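The approach summarized in the abstract (an MLP trained with resilient backpropagation on an 85%/15% random split) can be sketched in NumPy as below. This is a minimal illustration only: the synthetic production/steam data, network size, and Rprop step-size constants are assumptions standing in for the factory records and configuration used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for normalized "production rate -> steam consumption" records.
X = rng.uniform(0, 1, size=(200, 1))
y = (0.2 + 0.6 * X + 0.05 * np.sin(6 * X)).ravel()

# 85% / 15% random split, as in the paper.
idx = rng.permutation(len(X))
cut = int(0.85 * len(X))
train, test = idx[:cut], idx[cut:]

# Network: 1 input -> 8 hidden tanh units -> 1 linear output (illustrative size).
H = 8
W1 = rng.normal(0, 0.5, (1, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.5, (H, 1)); b2 = np.zeros(1)
params = [W1, b1, W2, b2]

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, (h @ W2 + b2).ravel()

def gradients(X, y):
    h, out = forward(X)
    err = (out - y)[:, None] / len(y)          # gradient of mean squared error / 2
    dh = (err @ W2.T) * (1 - h ** 2)           # backprop through tanh layer
    return [X.T @ dh, dh.sum(0), h.T @ err, err.sum(0)]

# Rprop: each weight keeps its own step size, grown when successive gradients
# agree in sign and shrunk when they disagree; the update uses only the sign
# of the gradient, never its magnitude.
steps = [np.full_like(p, 0.01) for p in params]
prev = [np.zeros_like(p) for p in params]
for epoch in range(500):
    grads = gradients(X[train], y[train])
    for p, g, s, pg in zip(params, grads, steps, prev):
        agree = np.sign(g * pg)
        s *= np.where(agree > 0, 1.2, np.where(agree < 0, 0.5, 1.0))
        np.clip(s, 1e-6, 0.1, out=s)
        p -= np.sign(g) * s                    # in-place update of W1, b1, W2, b2
        pg[...] = g

# Evaluate on the held-out 15%: error index and correlation, as in the abstract.
_, pred = forward(X[test])
mse = np.mean((pred - y[test]) ** 2)
r = np.corrcoef(pred, y[test])[0, 1]
print(f"test MSE = {mse:.4f}, R = {r:.4f}")
```

On this toy problem the network recovers the smooth production-to-steam mapping closely; the resulting error index and correlation coefficient play the same role as the 0.0674 and 0.9781 figures reported for the real factory data.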


WSEAS Transactions on Power Systems, ISSN / E-ISSN: 1790-5060 / 2224-350X, Volume 12, 2017, Art. #28, pp. 238-247


Copyright © 2017 Author(s) retain the copyright of this article. This article is published under the terms of the Creative Commons Attribution License 4.0


